
Chapter 7 - Building a TCP/IP Router-Based Network

Cisco TCP/IP Routing Professional Reference
Chris Lewis
  Copyright © 1999 The McGraw-Hill Companies, Inc.

IP Unnumbered and Data Compression
This section illustrates the use of two configuration options for Cisco routers that do not fall neatly into any other section. Both IP unnumbered and data compression are useful when building an internetwork, and both are explained fully here.
IP Unnumbered
IP unnumbered has been explained in overview previously. The first benefit of IP unnumbered is that it allows you to save on IP address space, which is particularly important if you are using InterNIC-assigned addresses on your internetwork. The second benefit is that any interface configured as IP unnumbered can communicate with any other IP unnumbered interface, without our having to worry about whether the two interfaces are on the same subnet. This will be important when we design a dial backup solution.
The concept behind IP unnumbered is that you do not have to assign a whole subnet to a point-to-point connection. Serial ports on a router can "borrow" an IP address from another interface for communication over point-to-point links. This concept is best explored by using the three-router lab we put together earlier. The only change to the router setup is that router 1 and router 2 now are connected via their serial ports, as shown in Fig. 7-15.
Figure 7-15: Lab configuration of IP unnumbered investigation with working route updates
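In configuration terms, the borrowing is done with the ip unnumbered command, which points a serial interface at a numbered interface on the same router. The following is a minimal sketch based on the addressing that can be inferred from Fig. 7-15 and the routing tables below; your interface numbering may differ.
Router2(config)#interface serial 0
Router2(config-if)#ip address 120.1.1.2 255.255.255.224
Router2(config-if)#interface serial 1
Router2(config-if)#ip unnumbered serial 0
Router3(config)#interface ethernet 0
Router3(config-if)#ip address 120.1.1.33 255.255.255.224
Router3(config-if)#interface serial 1
Router3(config-if)#ip unnumbered ethernet 0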
With the IP addressing scheme shown in Fig. 7-15, effectively one major network number with two subnets, the route information about subnets is maintained, as we can see by looking at the routing tables on router 2 and router 3.
Router2>show ip route
     120.0.0.0 255.255.255.224 is subnetted, 2 subnets
I       120.1.1.32 [100/8576] via 120.1.1.33, 00:00:42, Serial1
C       120.1.1.0 is directly connected, Serial0
Router3>show ip route
     120.0.0.0 255.255.255.224 is subnetted, 2 subnets
C       120.1.1.32 is directly connected, Ethernet0
I       120.1.1.0 [100/10476] via 120.1.1.2, 00:00:19, Serial1
This routing table shows us that router 3 has successfully learned about the 120.1.1.0 subnet from Serial 1, which is being announced from router 2 with the source address 120.1.1.2. This is as we would expect, as the Serial 1 interface on router 2 is borrowing the address from its Serial 0 interface. If the Serial 1 interfaces on both routers had their own addressing, we would expect the routing table in router 3 to indicate that it had learned of the 120.1.1.0 subnet from the address of the Serial 1 interface on router 2, not the borrowed one.
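If you want to confirm which address an unnumbered serial port is borrowing, a quick check of its IP settings with the show ip interface exec command will tell you; for example:
Router2>show ip interface serial 1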
IP unnumbered is easy to break: Just change the address of the Ethernet interface on router 3 to 193.1.1.33 and see what happens, as illustrated in Fig. 7-16.
Figure 7-16: Lab configuration for IP unnumbered investigation with nonworking route updates
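The change itself is a single interface command; the 255.255.255.224 mask shown here is the same mask used elsewhere in this lab:
Router3(config)#interface ethernet 0
Router3(config-if)#ip address 193.1.1.33 255.255.255.224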
To speed up the process of the routing table in router 2 adapting to the change in topology, issue the reload command from privileged mode. After router 2 has completed its reload, try to ping 120.1.1.2 from router 3. The ping fails, so let's look at the routing table of router 3.
Router3>show ip route
     120.0.0.0 255.255.255.224 is subnetted, 1 subnets
I       120.1.1.0 [100/10476] via 120.1.1.2, 00:00:18, Serial1
     193.1.1.0 255.255.255.224 is subnetted, 1 subnets
C       193.1.1.32 is directly connected, Ethernet0
Everything here appears to be fine, so let's examine the routing table of router 2 to see if we can determine the problem.
Router2>show ip route
     120.0.0.0 255.255.255.224 is subnetted, 1 subnets
C       120.1.1.0 is directly connected, Serial0
     193.1.1.0 is variably subnetted, 2 subnets, 2 masks
I       193.1.1.0 255.255.255.0 [100/8576] via 193.1.1.33, 00:01:15, Serial1
I       193.1.1.32 255.255.255.255 [100/8576] via 193.1.1.33, 00:01:15, Serial1
Straight away, the words variably subnetted should make you aware that something is wrong. From the discussion on IGRP in Chap. 4, we know that IGRP does not handle variable-length subnet masks properly, so any variable-length subnetting is likely to cause us problems. Looking more closely, we see that router 2 is treating 193.1.1.32 as a host address, because it assigned a netmask of 255.255.255.255 to that address. Treating 193.1.1.32 this way means that there is no entry for the 193.1.1.32 subnet, and therefore no way to reach 193.1.1.33.
The reason behind all this is that subnet information is not transported across major network number boundaries, and an IP unnumbered link is treated in much the same way as a boundary between two major network numbers. Router 3 therefore advertises the 193.1.1.32 subnet to router 2, which is not expecting subnet information across that link; it is expecting only major network information, so it treats 193.1.1.32 as a host.
A simple rule by which to remember this is that subnet information will survive crossing an IP unnumbered link only if the networks on either side of the link belong to the same major network number.
Data Compression
Once you have optimized the traffic over your internetwork by applying access lists to all appropriate types of traffic and to periodic updates, you can further improve the efficiency of WAN links with data compression, a cost-effective way to increase the throughput available on a given link. If you have a 64 kbps link that is approaching saturation, your choices are to upgrade to a 128 kbps link or to see whether you can get more out of the existing link through data compression.
All data compression schemes work on the basis of two devices connected over one link, both running the same algorithm: the transmitting device compresses the data and the receiving device decompresses it. Conceptually, two types of data compression can be applied, depending on the type of data being transported. These two types of compression are lossy and lossless.
At first, a method called “lossy” (a name that implies it will lose data) might not seem particularly attractive. There are types of data, however, such as voice or video transmission, that are still usable despite a certain amount of lost data. Allowing some lost data significantly increases the factor by which data can be compressed. JPEG and MPEG are examples of lossy data compression. In our case, though, we are building networks that need to deliver data to computers that typically do not tolerate any significant data loss.
Lossless data compression comes in either statistical or dictionary form. The statistical method is not particularly applicable here, as it relies on the traffic being compressed being consistent and predictable, whereas internetwork traffic tends to be neither. Cisco's data compression methods, STAC and Predictor, are based on the dictionary style of compression. These methods rely on the two communicating devices sharing a common dictionary that maps special codes to actual traffic patterns. STAC is based on the Lempel-Ziv algorithm, which identifies commonly transmitted sequences and replaces them in the data stream with a smaller code. This code is recognized at the receiving end, extracted from the data stream, and the original sequence reinserted. In this manner, less data is sent over the WAN link while the same raw data is delivered.
Predictor tries to predict the next sequence of characters, based upon a statistical analysis of what was transmitted previously. My experience has led me to use the STAC algorithm, which, although it is more CPU-intensive than Predictor, requires less memory to operate. No matter which method you choose, you can expect an increase in latency. Compression algorithms delay the passage of data through an internetwork. While typically this is not very significant, some client/server applications that are sensitive to timing issues may be disrupted by the operation of data compression algorithms.
One of the great marketing hypes of the networking world has been the advertisement of impressive compression rates from vendors offering data compression technology. To its credit, Cisco has always advertised the limitations of compression technology as well as its potential benefits. First of all, there is no such thing as a definitive value for the compression rate you will get on your internetwork. Compression rates are totally dependent on the type of traffic being transmitted. If your traffic is mainly ASCII text with 70 to 80 percent data redundancy, you may get a compression ratio near 4:1. A typical target for internetworks that carry a mix of traffic is more realistically 2:1. When you implement data compression, you must keep a close eye on the processor and memory utilization. The commands to monitor these statistics are show proc and show proc mem, respectively.
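For reference, show proc and show proc mem are abbreviations of the following exec commands, which report per-process CPU and memory utilization, respectively:
Router1#show processes
Router1#show processes memory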
Implementing Data Compression.     The first data compression technique to which most people are introduced is header compression, as implemented by the Van Jacobson algorithm. This type of compression can deliver benefits when the traffic transported consists of many small packets, such as Telnet traffic. The processing requirements for this type of compression are high, however, so it is rarely implemented on links with greater throughput than 64 kbps. The most popular implementation of Van Jacobson header compression is for asynchronous links, where it is implemented on a per-interface basis. Many popular PC PPP stacks that drive asynchronous communication implement this type of header compression by default.
The following enables Van Jacobson header compression on interface Async 1, where the passive keyword suppresses compression until a compressed header has been received from the computer connecting to this port.
Router1(config)#interface async 1
Router1(config-if)#ip tcp header-compression passive
Clearly, Van Jacobson header compression works only for TCP/IP traffic. Better overall compression ratios can be achieved by compressing the whole data stream coming out of an interface by using per-interface compression, which is protocol-independent. You can use STAC or Predictor to compress the entire data stream, which is then encapsulated again in another Data Link level protocol such as PPP or LAPB to ensure error correction and packet sequencing. It is necessary to re-encapsulate the compressed data because the original header will have been compressed along with the actual user data and therefore will not be readable by the router receiving the compressed data stream.
A clear disadvantage to this type of compression is that, if traffic has to traverse many routers from source to destination, potentially significant increases in latency may occur. That can happen because traffic is compressed and decompressed at every router through which the traffic passes. This is necessary for the receiving router to read the uncompressed header information and decide where to forward the packet.
Per-interface compression delivers typical compression rates of 2:1 and is therefore worth serious consideration for internetworks with fewer than 10 routers between any source and destination. This type of compression can be implemented for point-to-point protocols such as PPP or the default Cisco HDLC. The following example shows implementation for PPP on the Serial 0 interface. For this method to work, both serial ports connected on the point-to-point link must be similarly configured for compression.
Router1(config)#interface serial 0
Router1(config-if)#encapsulation ppp
Router1(config-if)#compress stac
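Because compression must run on both ends of the link, the router at the far end of this serial connection needs the same configuration. Assuming, purely for illustration, that the other end of the link is Serial 0 on router 2, the matching commands would be:
Router2(config)#interface serial 0
Router2(config-if)#encapsulation ppp
Router2(config-if)#compress stac
If you prefer the Predictor algorithm, substitute compress predictor on both ends; whichever you choose, both ends of the link must run the same algorithm.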
The per-interface type of compression, which requires each router to decompress the received packet before it can be forwarded, is not suitable for use on a public network such as Frame Relay or X.25. The devices within the public network are unlikely to be configured to decompress the packets in order to read the header information needed to forward them. For connection to public networks, Cisco supports compression on a per-virtual-circuit basis. This type of compression leaves the header information intact and compresses only the user data being sent. An example of implementing this type of configuration for an X.25 network follows. As with per-interface compression, both serial ports that are connected (this time via a public network) must be similarly configured.
Router1(config)#interface serial 0
Router1(config-if)#x25 map compressedtcp 193.1.1.1 1234567879 compress
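Note that this example assumes the Serial 0 interface has already been configured for X.25 operation. A slightly fuller sketch, in which the local X.121 address 987654321 is purely illustrative, would look like this:
Router1(config)#interface serial 0
Router1(config-if)#encapsulation x25
Router1(config-if)#x25 address 987654321
Router1(config-if)#x25 map compressedtcp 193.1.1.1 1234567879 compress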
The x25 map command implements TCP header compression with the compressedtcp keyword. Per-virtual-circuit compression is implemented via the compress keyword at the end of the command. Great care should be taken when you are considering implementing compression on a per-virtual-circuit basis. Each virtual circuit needs its own dictionary, which can quickly use up most of the available memory in a router. My happiest experiences with compression have been with per-interface compression on networks with a maximum hop count of around 6 or 7 from any source to any destination.